Emotions play an important role in interpersonal interactions and social conflict, yet their function in the development of controversy and disagreement in online conversations has not been explored. To address this gap, we study controversy on Reddit, a popular network of online discussion forums. We collect discussions from a wide variety of topical forums and use emotion detection to recognize a range of emotions from text, including anger, fear, joy, and admiration. Our study has three main findings. First, controversial comments express more anger and less admiration, joy, and optimism than non-controversial comments. Second, controversial comments affect the emotions of downstream comments in a discussion, usually resulting in a long-term increase in anger and a decrease in positive emotions, although the magnitude and direction of emotional change depend on the forum. Finally, we show that emotions help better predict which comments will become controversial. Understanding the emotional dynamics of online discussions can help communities better manage conversations.
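As a rough illustration of how per-comment emotion scores could feed a controversy classifier, the sketch below pairs an off-the-shelf emotion model with a logistic regression. The model checkpoint, toy comments, and feature set are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch: emotion scores as features for controversy prediction.
# The emotion model, toy data, and classifier are stand-in assumptions, not the paper's setup.
import numpy as np
from transformers import pipeline
from sklearn.linear_model import LogisticRegression

emotion = pipeline("text-classification",
                   model="j-hartmann/emotion-english-distilroberta-base",
                   top_k=None)  # return a score for every emotion label

def emotion_features(comments):
    """Map each comment to a fixed-length vector of emotion scores."""
    feats = []
    for scores in emotion(comments, truncation=True):
        feats.append([s["score"] for s in sorted(scores, key=lambda s: s["label"])])
    return np.array(feats)

train_comments = ["You people never listen, this thread is a joke.",
                  "Great write-up, thanks for sharing!"]
train_labels = [1, 0]  # toy stand-ins for Reddit's "controversial" flag

clf = LogisticRegression(max_iter=1000).fit(emotion_features(train_comments), train_labels)
print(clf.predict_proba(emotion_features(["This is outrageous and wrong."]))[:, 1])
```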
Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships. To address this weakness, we propose a new model, MixHop, that can learn these relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances. MixHop requires no additional memory or computational complexity and outperforms challenging baselines. In addition, we propose a sparsity regularization that allows us to visualize how the network prioritizes neighborhood information across different graph datasets. Our analysis of the learned architectures reveals that neighborhood mixing varies per dataset.
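The layer below is a minimal reconstruction of the neighborhood-mixing idea from the abstract: feature representations propagated over several powers of the normalized adjacency are transformed separately and concatenated. It is a sketch for illustration, not the authors' reference implementation.

```python
# Minimal sketch of a MixHop-style layer: mix powers of the normalized adjacency.
import torch
import torch.nn as nn

class MixHopLayer(nn.Module):
    def __init__(self, in_dim, out_dim, powers=(0, 1, 2)):
        super().__init__()
        self.powers = powers
        # one linear map W_j per adjacency power A^j; outputs are concatenated
        self.linears = nn.ModuleList(nn.Linear(in_dim, out_dim) for _ in powers)

    def forward(self, A_hat, H):
        outs = []
        for j, lin in zip(self.powers, self.linears):
            Hj = H
            for _ in range(j):            # A_hat^j @ H without forming A_hat^j explicitly
                Hj = A_hat @ Hj
            outs.append(torch.relu(lin(Hj)))
        return torch.cat(outs, dim=-1)    # neighborhood information at several distances

# toy usage: 4 nodes, normalized adjacency with self-loops assumed precomputed
A_hat = torch.eye(4)
H = torch.randn(4, 8)
layer = MixHopLayer(8, 16)
print(layer(A_hat, H).shape)              # torch.Size([4, 48])
```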
The demonstrated success of transfer learning has popularized approaches that involve pretraining models from massive data sources and subsequent finetuning towards a specific task. While such approaches have become the norm in fields such as natural language processing, implementation and evaluation of transfer learning approaches for chemistry are in the early stages. In this work, we demonstrate finetuning for downstream tasks on a graph neural network (GNN) trained over a molecular database containing 2.7 million water clusters. The use of Graphcore IPUs as an AI accelerator for training molecular GNNs reduces training time from a reported 2.7 days on 0.5M clusters to 1.2 hours on 2.7M clusters. Finetuning the pretrained model for downstream tasks of molecular dynamics and transfer to a different potential energy surface took only 8.3 hours and 28 minutes, respectively, on a single GPU.
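The following sketch shows only the generic finetuning step the abstract describes: loading pretrained GNN weights, replacing the readout head for the downstream target, and adapting the backbone with a smaller learning rate. The tiny model class and checkpoint name are placeholders, not the architecture used in the paper.

```python
# Hedged sketch of the finetuning step: swap the readout head and use per-group learning rates.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):                      # stand-in for the pretrained molecular GNN
    def __init__(self, hidden=64):
        super().__init__()
        self.message = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, 1)    # pretraining target (e.g., cluster energy)

    def forward(self, node_feats, adj):
        h = torch.relu(self.message(adj @ node_feats))
        return self.readout(h.mean(dim=0))     # graph-level prediction

model = TinyGNN()
# model.load_state_dict(torch.load("pretrained_water_clusters.pt"))  # hypothetical checkpoint
model.readout = nn.Linear(64, 1)               # fresh head for the downstream task
opt = torch.optim.Adam([
    {"params": model.message.parameters(), "lr": 1e-5},   # gently adapt the backbone
    {"params": model.readout.parameters(), "lr": 1e-3},   # train the new head faster
])
print(model(torch.randn(5, 64), torch.eye(5)))  # forward pass on a toy 5-node graph
```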
In a scenario with multiple persons talking simultaneously, the spatial characteristics of the signals are the most distinctive feature for extracting the target signal. In this work, we develop a deep joint spatial-spectral non-linear filter that can be steered in an arbitrary target direction. For this, we propose a simple and effective conditioning mechanism, which sets the initial state of the filter's recurrent layers based on the target direction. We show that this scheme is more effective than the baseline approach and increases the flexibility of the filter at no performance cost. The resulting spatially selective non-linear filters can also be used for speech separation of an arbitrary number of speakers and enable very accurate multi-speaker localization, as we demonstrate in this paper.
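A minimal sketch of the conditioning mechanism as described: the target direction is embedded and used to set the initial state of the filter's recurrent layer, which then predicts a spectral mask. Layer sizes and the cos/sin encoding of the azimuth are assumptions for illustration.

```python
# Sketch: steer a recurrent filter by deriving its initial state from the target direction.
import torch
import torch.nn as nn

class SteeredFilter(nn.Module):
    def __init__(self, feat_dim=257, hidden=256):
        super().__init__()
        self.dir_embed = nn.Sequential(nn.Linear(2, hidden), nn.Tanh())  # cos/sin of azimuth
        self.to_h0 = nn.Linear(hidden, hidden)
        self.to_c0 = nn.Linear(hidden, hidden)
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.mask = nn.Linear(hidden, feat_dim)

    def forward(self, spec_feats, azimuth):
        # spec_feats: (batch, frames, feat_dim); azimuth: (batch,) target direction in radians
        d = self.dir_embed(torch.stack([torch.cos(azimuth), torch.sin(azimuth)], dim=-1))
        h0 = self.to_h0(d).unsqueeze(0)        # (1, batch, hidden): conditions the filter
        c0 = self.to_c0(d).unsqueeze(0)
        out, _ = self.rnn(spec_feats, (h0, c0))
        return torch.sigmoid(self.mask(out))   # e.g., a spectral mask for the target direction

x = torch.randn(2, 100, 257)
print(SteeredFilter()(x, torch.tensor([0.3, 1.2])).shape)   # torch.Size([2, 100, 257])
```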
Matrix functions can be used to rewrite smooth spectrally constrained matrix optimization problems as unconstrained problems over the set of symmetric matrices, which are then solved via the cubic-regularized Newton method. A second-order chain rule identity for matrix functions is proven, which allows the higher-order derivatives needed for cubic-regularized Newton to be computed and yields a new convergence analysis of cubic-regularized Newton on matrix vector spaces. We demonstrate the applicability of our method through numerical experiments on synthetic and real datasets. In our experiments, we formulate a new model for estimating fair and robust covariance matrices in the spirit of the Tyler's M-estimator (TME) model and demonstrate its advantages.
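To make the reparametrization concrete, the snippet below applies a smooth scalar function to the eigenvalues of an unconstrained symmetric variable, so that a spectral constraint (here positive definiteness, via exp) is satisfied by construction. This illustrates the general idea only; the paper's specific matrix functions and its cubic-regularized Newton solver are not reproduced here.

```python
# Illustration of the reparametrization idea (not the paper's exact formulation).
import numpy as np

def apply_spectral(f, S):
    """Apply scalar function f to the eigenvalues of symmetric S: X = U f(L) U^T."""
    w, U = np.linalg.eigh(S)
    return (U * f(w)) @ U.T

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
S = 0.5 * (S + S.T)                      # unconstrained symmetric variable
X = apply_spectral(np.exp, S)            # positive definite by construction
print(np.linalg.eigvalsh(X).min() > 0)   # True: the spectral constraint holds automatically
```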
The key advantage of using multiple microphones for speech enhancement is that spatial filtering can be used to complement tempo-spectral processing. In a traditional setting, linear spatial filtering (beamforming) and single-channel post-filtering are commonly performed separately. In contrast, there is a trend towards employing deep neural networks (DNNs) to learn joint spatial and tempo-spectral non-linear filters, which means that the restriction to a linear processing model and the separate processing of spatial and tempo-spectral information can potentially be overcome. However, the internal mechanisms that lead to good performance of such data-driven filters for multi-channel speech enhancement are not well understood. Therefore, in this work, we analyze the properties of a non-linear spatial filter realized by a DNN as well as its interdependency with temporal and spectral processing by carefully controlling the information sources (spatial, spectral, and temporal) available to the network. We confirm the superiority of a non-linear spatial processing model, which outperforms an oracle linear spatial filter in a challenging speaker extraction scenario by 0.24 POLQA score for a low number of microphones. Our analyses reveal that, in particular, spectral information should be processed jointly with spatial information, as this increases the spatial selectivity of the filter. Our systematic evaluation then leads to a simple network architecture that outperforms state-of-the-art network architectures on a speaker extraction task by 0.22 POLQA score and on the CHiME3 data by 0.32 POLQA score.
Employing deep neural networks (DNNs) to directly learn filters for multi-channel speech enhancement has potentially two key advantages over a traditional approach that combines a linear spatial filter with an independent tempo-spectral post-filter: 1) non-linear spatial filters can overcome potential restrictions originating from a linear processing model, and 2) the joint processing of spatial and tempo-spectral information can exploit interdependencies between the different sources of information. Various DNN-based non-linear filters have been proposed recently, reporting good enhancement performance. However, little is known about their internal mechanisms, which turns network architecture design into a game of chance. Therefore, in this paper, we perform experiments to better understand the internal processing of spatial, spectral, and temporal information by DNN-based non-linear filters. On the one hand, our experiments in a difficult speech extraction scenario confirm the importance of non-linear spatial filtering, which outperforms an oracle linear spatial filter by 0.24 POLQA score. On the other hand, we demonstrate that joint processing leads to a large performance gap of 0.4 POLQA score between network architectures that exploit spectral versus temporal information in addition to spatial information.
We propose a novel quadratic programming formulation for estimating the corruption levels in group synchronization and use these estimates to solve this problem. Our objective function exploits the cycle consistency of the group, and we thus refer to our method as detection and estimation of structural consistency (DESC). This general framework can be extended to other algebraic and geometric structures. Our formulation has the following advantages: it can tolerate corruption as high as the information-theoretic bound, it does not require a good initialization for the estimates of the group elements, it has a simple interpretation, and under some mild conditions the global minimum of our objective function exactly recovers the corruption levels. We demonstrate the competitive accuracy of our approach on both synthetic and real data experiments of rotation averaging.
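The snippet below illustrates the cycle-consistency signal that such a formulation can exploit: composing relative rotations around a 3-cycle should return the identity, and the angular deviation flags corruption along the cycle. It is only an illustration of the underlying statistic, not the paper's quadratic program itself.

```python
# Cycle-consistency check for rotation synchronization on a 3-cycle (i, j, k).
import numpy as np

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def cycle_inconsistency(R_ij, R_jk, R_ki):
    """Angular distance (radians) between R_ij R_jk R_ki and the identity."""
    R = R_ij @ R_jk @ R_ki
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

Ri, Rj, Rk = rotation_z(0.1), rotation_z(0.7), rotation_z(1.5)
clean = cycle_inconsistency(Ri @ Rj.T, Rj @ Rk.T, Rk @ Ri.T)          # ~0: consistent cycle
corrupt = cycle_inconsistency(rotation_z(2.0), Rj @ Rk.T, Rk @ Ri.T)  # large: corrupted edge
print(round(clean, 3), round(corrupt, 3))
```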
We introduce the task of entity-centric query refinement. Given an input query whose answer is a (potentially large) collection of entities, the task output is a small set of query refinements intended to help the user in efficient domain exploration and entity discovery. We propose a method to create a training dataset for this task. For a given input query, we use an existing knowledge-base taxonomy as a source of candidate query refinements and select a final set of refinements from these candidates using a search procedure over the sets of entities that answer the input query. We demonstrate that our approach identifies refinement sets that human annotators judge to be interesting, comprehensive, and non-redundant. Furthermore, we find that a text generation model trained on the newly constructed dataset is able to produce refinements for novel queries that are not covered by the existing taxonomy. Our code and data are available at https://github.com/google-research/language/tree/master/master/language/qresp.
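As a rough sketch of the generation side, the snippet below feeds a query to a generic seq2seq model and decodes several candidate refinements. The checkpoint, prompt format, and example query are illustrative assumptions rather than the released setup.

```python
# Hedged sketch: generating query refinements with a seq2seq model (illustrative only).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# A training pair might couple a query with taxonomy-derived refinements, e.g.:
#   input:  "refine query: national parks in the US"
#   target: "national parks in the US by state; ...; by annual visitors"
inputs = tok("refine query: national parks in the US", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4, num_return_sequences=4)
for o in outputs:
    print(tok.decode(o, skip_special_tokens=True))
```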
Image generative models can learn the distributions of the training data and consequently generate examples by sampling from these distributions. However, when the training dataset is corrupted with outliers, generative models will likely produce examples that are similar to the outliers. In fact, a small portion of outliers may induce state-of-the-art generative models, such as the Vector Quantized-Variational AutoEncoder (VQ-VAE), to learn a significant mode from the outliers. To mitigate this problem, we propose a robust generative model based on VQ-VAE, which we name Robust VQ-VAE (RVQ-VAE). To achieve robustness, RVQ-VAE uses two separate codebooks for inliers and outliers. To ensure that the codebooks embed the correct components, we iteratively update the sets of inliers and outliers during each training epoch. To ensure that the encoded data points are matched to the correct codebook, we quantize using a weighted Euclidean distance, whose weights are determined by the directional variances of the codebooks. Both codebooks, together with the encoder and decoder, are trained jointly based on the reconstruction loss and the quantization loss. We experimentally demonstrate that RVQ-VAE is able to generate examples from inliers even when a large portion of the training data points are corrupted.
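The sketch below reconstructs the described quantization rule: each encoding is assigned to its nearest codeword across an inlier and an outlier codebook under a Euclidean distance weighted by the codebook's per-dimension (directional) variances. Using the inverse variances as weights, and the toy codebooks, are assumptions for illustration; in RVQ-VAE the codebooks are learned jointly with the encoder and decoder.

```python
# Sketch of weighted-Euclidean quantization against two codebooks (not the authors' code).
import numpy as np

def quantize(z, inlier_cb, outlier_cb):
    """Assign each encoding in z (n, d) to its nearest codeword across both codebooks."""
    choices = []
    for name, C in (("inlier", inlier_cb), ("outlier", outlier_cb)):
        w = 1.0 / (C.var(axis=0) + 1e-6)                          # directional-variance weights
        d = (((z[:, None, :] - C[None, :, :]) ** 2) * w).sum(-1)  # (n, k) weighted distances
        choices.append((name, d.min(axis=1), d.argmin(axis=1)))
    assignments = []
    for i in range(len(z)):
        name, dist, idx = min(choices, key=lambda c: c[1][i])     # pick the closer codebook
        assignments.append((name, int(idx[i])))
    return assignments

rng = np.random.default_rng(0)
inliers = rng.normal(0, 1, (8, 4))     # toy codebooks; learned jointly in the actual model
outliers = rng.normal(5, 1, (4, 4))
print(quantize(rng.normal(0, 1, (3, 4)), inliers, outliers))
```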